

SIRI: Spatial Relation Induced Network For Spatial Description Resolution

Neural Information Processing Systems

Spatial Description Resolution, a language-guided localization task, aims to locate a target in a panoramic street view given a corresponding language description. Explicitly characterizing object-level relationships while distilling spatial relationships is currently absent from existing methods but crucial to this task. Mimicking humans, who sequentially traverse spatial relationship words and objects from a first-person view to locate their target, we propose a novel Spatial Relation Induced (SIRI) network. Specifically, visual features are first correlated at an implicit object level in a projected latent space; they are then distilled by each spatial relationship word, yielding a differently activated feature map for each spatial relationship. Further, we introduce global position priors to fix the absence of positional information, which would otherwise cause ambiguities in global positional reasoning. The linguistic and visual features are concatenated to finalize the target localization. Experimental results on the Touchdown dataset show that our method is around 24% more accurate than the state-of-the-art method, measured within an 80-pixel radius. Our method also generalizes well on our proposed extended dataset, collected using the same settings as Touchdown.
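The pipeline the abstract describes can be sketched in three steps: project visual features into a latent space, modulate them once per spatial-relation word, and concatenate a global position prior before localization. The sketch below is a minimal toy illustration under assumptions, not the authors' implementation: the per-word "distillation" is shown as a FiLM-style scale-and-shift, the projection weights are random stand-ins for learned parameters, and the position prior is modeled as normalized coordinate channels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature-map size (a real panorama feature map would be much wider).
H, W, C = 8, 16, 4
feat = rng.standard_normal((H, W, C))

# Step 1: implicit object-level correlation via a projection into a latent
# space (a random matrix stands in for learned weights).
proj = rng.standard_normal((C, C))
latent = feat @ proj  # (H, W, C)

# Step 2: distill the latent features once per spatial-relation word.
# Assumed here to be a FiLM-style per-channel scale/shift; the paper's
# exact operation may differ.
relations = ["left", "right", "behind"]
gammas = {r: rng.standard_normal(C) for r in relations}
betas = {r: rng.standard_normal(C) for r in relations}
distilled = {r: latent * gammas[r] + betas[r] for r in relations}

# Step 3: global position prior as normalized y/x coordinate channels,
# compensating for the lack of absolute position in convolutional features.
ys, xs = np.mgrid[0:H, 0:W]
pos = np.stack([ys / (H - 1), xs / (W - 1)], axis=-1)  # (H, W, 2)

# Fuse the per-relation feature maps with the position prior; a localization
# head would then predict the target position from this tensor.
fused = np.concatenate([distilled[r] for r in relations] + [pos], axis=-1)
print(fused.shape)  # (H, W, len(relations) * C + 2)
```

Concatenating explicit coordinate channels is a common way to restore absolute position to translation-invariant convolutional features, which is consistent with the abstract's motivation for the global position priors.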


Review for NeurIPS paper: SIRI: Spatial Relation Induced Network For Spatial Description Resolution

Neural Information Processing Systems

Weaknesses: 1) The experiments are somewhat inadequate. In the paper, the authors only compare the proposed SIRI approach to the baseline from the original Touchdown dataset paper [2]. In fact, spatial description resolution is similar to referring expression and instruction grounding tasks. The authors should further compare against approaches from those tasks (such as MAttNet [18] or other recent methods from 2019). For example, although MAttNet is not designed for spatial description resolution, it also has semantic and position modules that handle spatial relations and object-relationship reasoning, which could serve as a substitute for Parts I & II of SIRI.


Review for NeurIPS paper: SIRI: Spatial Relation Induced Network For Spatial Description Resolution

Neural Information Processing Systems

The paper was reviewed by four expert reviewers, with initial scores of 6, 6, 6, 5. Reviewers acknowledge the commendable improvements of the proposed approach on a difficult and novel task. A number of issues were raised about the paper, including (1) poor exposition and language [all reviewers], (2) lack of comparison to FiLM [R1], and (3) specificity of the task and dataset [R2, R4], among others. The authors provided a rebuttal that was discussed by the reviewers and was ultimately convincing. Two of the reviewers upgraded their scores, resulting in unanimously positive, albeit marginally so, scores of 7, 6, 6, 6. The AC, despite having reservations about the quality of the writing, mentioned by all reviewers, agrees that the approach is valuable and presents a significant improvement over the state of the art on a relatively unexplored problem. As such, the AC agrees with the reviewers that the paper should be accepted.

